OpenAI’s Sora 2 Found to Generate Misleading Deepfakes with Ease
OpenAI's Sora 2 has demonstrated a concerning ability to fabricate convincing deepfake videos: a recent NewsGuard study found that the tool produced convincing misinformation for 80% of the false narratives it was tested against. The fabricated content included claims of election tampering, corporate hoaxes, and immigration-related disinformation, all rendered with alarming realism.
The study highlighted five instances in which Sora 2 reproduced narratives from Russian disinformation campaigns, generating fabricated footage of a Moldovan election official destroying ballots, a toddler being detained by U.S. immigration agents, and a fake Coca-Cola announcement about the Super Bowl. At a glance, each video was difficult to distinguish from genuine footage.
NewsGuard's findings underscore how little effort is required to produce these deepfakes: no technical expertise is needed, and Sora's watermark can be easily removed. The report arrives amid ongoing controversy over OpenAI's AI-generated depictions of public figures such as Martin Luther King Jr., raising both ethical and security concerns.